The Back-Prop and No-Prop Training Algorithms
Abstract
Back-Prop and No-Prop, two training algorithms for multi-layer neural networks, are compared in design and performance. With Back-Prop, all layers of the network receive least-squares training. With No-Prop, only the output layer receives least-squares training, while the hidden-layer weights are chosen randomly and then fixed. No-Prop is much simpler than Back-Prop. No-Prop can deliver performance equal to that of Back-Prop when the number of training patterns is less than or equal to the number of neurons in the final hidden layer. When the number of training patterns is increased beyond this, the performance of Back-Prop can be slightly better than that of No-Prop. However, the performance of No-Prop can be made equal to or better than that of Back-Prop by increasing the number of neurons in the final hidden layer. The algorithms are compared with respect to training time, minimum mean square error on the training patterns, and classification accuracy on the testing patterns, and are applied to pattern classification and nonlinear adaptive filtering.
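The contrast is easiest to see in code. Below is a minimal sketch of the No-Prop side, assuming one tanh hidden layer and a linear output layer; the function names and shapes are illustrative, not taken from the paper. The hidden weights are drawn once at random and frozen, and the output weights are obtained in closed form by least squares, so no error is propagated back through the network.

```python
import numpy as np

def fit_noprop(X, T, n_hidden, rng=None):
    """X: (n_patterns, n_inputs) inputs; T: (n_patterns, n_outputs) targets."""
    rng = rng or np.random.default_rng(0)
    W_h = rng.standard_normal((X.shape[1], n_hidden))  # random, then frozen
    H = np.tanh(X @ W_h)                               # fixed hidden responses
    # Only the output layer is fit, by ordinary least squares;
    # no error is propagated back into W_h.
    W_o, *_ = np.linalg.lstsq(H, T, rcond=None)
    return W_h, W_o

def predict(X, W_h, W_o):
    return np.tanh(X @ W_h) @ W_o
```

This also makes the capacity result above plausible: when the number of training patterns does not exceed n_hidden, the hidden-response matrix H generally has full row rank, so the least-squares system can fit the training targets exactly.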
Similar references
The No-Prop algorithm: A new learning algorithm for multilayer neural networks
A new learning algorithm for multilayer neural networks that we have named No-Propagation (No-Prop) is hereby introduced. With this algorithm, the weights of the hidden-layer neurons are set and fixed with random values. Only the weights of the output-layer neurons are trained, using steepest descent to minimize mean square error, with the LMS algorithm of Widrow and Hoff. The purpose of introd...
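For reference, the Widrow-Hoff LMS update mentioned here takes the familiar form (generic notation, not the paper's): for training pattern k, with hidden-layer output vector x_k, desired response d_k, and learning rate mu,

```latex
e_k = d_k - \mathbf{w}_k^{\top}\mathbf{x}_k, \qquad
\mathbf{w}_{k+1} = \mathbf{w}_k + 2\mu\, e_k\, \mathbf{x}_k
```

Because only the output layer adapts, the mean square error is a quadratic function of the output weights, so this steepest-descent rule has no local minima to get trapped in.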
Two Strategies Based on Meta-Heuristic Algorithms for Parallel Row Ordering Problem (PROP)
Proper arrangement of a facility layout is a key issue in management that influences the efficiency and profitability of manufacturing systems. The Parallel Row Ordering Problem (PROP) is a special case of the facility layout problem and consists of looking for the best locations of n facilities, where similar facilities (facilities which have some characteristics in common) should be arranged in a row ...
G-Prop: Global optimization of multilayer perceptrons using GAs
A general problem in model selection is to obtain the right parameters that make a model fit observed data. For a multilayer perceptron (MLP) trained with back-propagation (BP), this means finding an appropriate layer size and initial weights. This paper proposes a method (G-Prop, genetic backpropagation) that attempts to solve that problem by combining a genetic algorithm (GA) and BP to train MLPs w...
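As a rough sketch of that idea (not the authors' G-Prop implementation), one can wrap a BP-trained MLP in a simple evolutionary loop that searches over hidden-layer size and the seed controlling the initial weights. Here sklearn's MLPClassifier stands in for the BP learner, the GA is reduced to selection plus mutation, and all names and parameter values are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

def fitness(n_hidden, seed):
    # Train briefly with BP, then score on held-out data.
    net = MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=60,
                        random_state=seed).fit(X_tr, y_tr)
    return net.score(X_val, y_val)

rng = np.random.default_rng(0)
pop = [(int(rng.integers(2, 64)), int(rng.integers(1_000_000)))
       for _ in range(8)]
for generation in range(5):
    ranked = sorted(pop, key=lambda c: fitness(*c), reverse=True)
    parents = ranked[:4]                               # keep the best half
    children = [(max(2, h + int(rng.integers(-8, 9))), # mutate layer size
                 int(rng.integers(1_000_000)))         # fresh initial weights
                for h, _ in parents]
    pop = parents + children
best_hidden, best_seed = max(pop, key=lambda c: fitness(*c))
```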
MBP on T0: Mixing Floating- and Fixed-Point Formats in BP Learning
We examine the efficient implementation of back-prop type algorithms on T0 [4], a vector processor with a fixed-point engine, designed for neural network simulation. A matrix formulation of back-prop, Matrix Back Prop [1], has been shown to be very efficient on some RISCs [2]. Using Matrix Back Prop, we achieve an asymptotically optimal performance on T0 (about 0.8 GOPS) for both forward and backward ph...
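The "matrix formulation" referred to here amounts to writing both the forward pass and the gradient computation as dense matrix products over an entire batch, which is what maps well onto vector hardware. Below is a minimal numpy sketch for a two-layer network (tanh hidden layer, linear output, squared-error loss); shapes and names are illustrative, not taken from the MBP papers.

```python
import numpy as np

def mbp_step(X, T, W1, W2, lr=0.01):
    """One batch gradient step, phrased entirely as matrix products."""
    H = np.tanh(X @ W1)                      # forward, hidden layer
    Y = H @ W2                               # forward, linear output
    E = Y - T                                # batch output error
    G2 = H.T @ E                             # gradient w.r.t. W2
    G1 = X.T @ ((E @ W2.T) * (1.0 - H**2))   # gradient w.r.t. W1 (tanh')
    n = len(X)
    return W1 - lr * G1 / n, W2 - lr * G2 / n
```

On fixed-point hardware the interesting question, as the snippet suggests, is which of these products can tolerate reduced precision; the sketch above is plain floating point.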
Synthesis of 1-Hydroxy-2-(Prop-2'-Enyl)-9-Anthrone
An efficient synthesis of the 1-hydroxy-2-(prop-2'-enyl)-9-anthrone is described. Selective nitration of anthraquinone, reduction to the corresponding amine, diazotization and treatment with sulfuric acid solution afforded the 1-hydroxy-9,10-anthraquinone in good yield as the key intermediate. Reaction with allyl bromide/K2CO3 and subsequent selective reduction accompanie...